Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs) such as BERT. Despite setting new records in nearly every NLP task, PLMs still face a number of challenges, including poor interpretability, weak reasoning capability, and the need for large amounts of expensive annotated data when applied to downstream tasks. By integrating external knowledge into PLMs, \textit{\underline{K}nowledge-\underline{E}nhanced \underline{P}re-trained \underline{L}anguage \underline{M}odels} (KEPLMs) have the potential to overcome the above-mentioned limitations. In this paper, we examine KEPLMs systematically through a series of studies. Specifically, we outline the common types and different formats of knowledge to be integrated into KEPLMs, detail the existing methods for building and evaluating KEPLMs, present the applications of KEPLMs in downstream tasks, and discuss future research directions. Researchers will benefit from this survey by gaining a quick and comprehensive overview of the latest developments in this field.
Deep learning-based methods have achieved significant performance for image defogging. However, existing methods are mainly developed for land scenes and perform poorly when dealing with overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes. To promote the recovery of objects on the water, two loss functions are exploited for the network, where a prior map is designed by inverting the dark channel and applying min-max normalization to suppress the sky and emphasize objects. However, due to the unpaired training set, the network may learn an under-constrained domain mapping from foggy to fog-free images, leading to artifacts and loss of detail. Thus, we propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive qualitative and quantitative comparisons demonstrate that the proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
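A rough sketch of how such a prior map could be computed, assuming the standard per-patch dark channel; the patch size, loop-based implementation, and normalization constant are illustrative rather than the paper's actual code:

```python
import numpy as np

def prior_map(image, patch=15):
    """Compute an illustrative prior map: invert the dark channel,
    then min-max normalize so bright sky regions are suppressed and
    objects on the water stand out. `image`: HxWx3 floats in [0, 1]."""
    h, w, _ = image.shape
    pad = patch // 2
    channel_min = image.min(axis=2)               # per-pixel min over RGB
    padded = np.pad(channel_min, pad, mode="edge")
    dark = np.empty((h, w))
    for i in range(h):                            # local min over a patch
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    inverted = 1.0 - dark                         # invert the dark channel
    lo, hi = inverted.min(), inverted.max()
    return (inverted - lo) / (hi - lo + 1e-8)     # min-max normalization
```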
Code generation models have achieved impressive performance. However, they tend to be brittle, as slight edits to a prompt can lead to very different generations; these robustness properties, critical for user experience when models are deployed in real-life applications, are not well understood. Most existing work on robustness in text or code tasks has focused on classification, while robustness in generation tasks is an uncharted area, and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code, covering docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice and to preserve the original semantic meaning, thus providing multifaceted assessments of a model's robustness. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models that consider the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code can serve as objective evaluation. We demonstrate ReCode on SOTA models using HumanEval and MBPP, as well as function completion tasks derived from them. Interesting observations include: CodeGen is more robust than InCoder and GPT-J; models are most sensitive to syntax perturbations; and robustness evaluation is more challenging on MBPP than on HumanEval.
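The worst-case metric can be sketched as follows; `generate`, `perturb`, and `passes_tests` are hypothetical stand-ins for the model, one ReCode-style transformation, and an execution harness, so this illustrates the metric's shape rather than the benchmark's implementation:

```python
def robust_pass_rate(problems, generate, perturb, passes_tests, n_variants=5):
    """Fraction of problems solved under worst-case perturbation:
    a problem counts as robustly solved only if the generation for
    *every* perturbed variant of its prompt passes the unit tests,
    using code execution as the objective evaluation."""
    solved = 0
    for problem in problems:
        variants = [perturb(problem["prompt"]) for _ in range(n_variants)]
        if all(passes_tests(generate(p), problem) for p in variants):
            solved += 1          # survives the worst case for this perturbation
    return solved / len(problems)
```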
Disentangled representation learning remains challenging, as ground-truth factors of variation do not naturally exist. To address this, we present Vocabulary Disentanglement Retrieval~(VDR), a simple yet effective retrieval-based disentanglement framework that leverages natural language as distant supervision. Our approach is built upon the widely used bi-encoder architecture with disentanglement heads and is trained on data-text pairs that are readily available on the web or in existing datasets. This makes our approach task- and modality-agnostic, with potential for a wide range of downstream applications. We conduct experiments on 16 datasets in both text-to-text and cross-modal scenarios and evaluate VDR in a zero-shot setting. With the incorporation of disentanglement heads and a minor increase in parameters, VDR achieves significant improvements over the base retriever it is built upon, with 9% higher NDCG@10 scores in zero-shot text-to-text retrieval and an average of 13% higher recall in cross-modal retrieval. Compared with other baselines, VDR outperforms them on most tasks, while also improving explainability and efficiency.
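One plausible reading of a disentanglement head, sketched below: project the dense encoder output onto the vocabulary space and sparsify, so each active dimension corresponds to a human-readable token. The non-negative activation and top-k sparsification are assumptions, not the released VDR architecture:

```python
import torch
import torch.nn as nn

class DisentanglementHead(nn.Module):
    """Map a dense encoder output to a sparse |V|-dimensional vector
    over the vocabulary, so retrieval scores (dot products between
    such vectors) decompose into interpretable token contributions."""
    def __init__(self, hidden_dim, vocab_size, top_k=128):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, vocab_size)
        self.top_k = top_k

    def forward(self, h):                     # h: (batch, hidden_dim)
        acts = torch.log1p(torch.relu(self.proj(h)))   # non-negative weights
        vals, idx = acts.topk(self.top_k, dim=-1)      # keep top-k tokens
        return torch.zeros_like(acts).scatter_(-1, idx, vals)
```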
Unsupervised domain adaptation (UDA) has been highly successful in transferring knowledge acquired from a label-rich source domain to a label-scarce target domain. Open-set domain adaptation (ODA) and universal domain adaptation (UNDA) have been proposed as solutions to the problem posed by additional novel categories in the target domain. Existing ODA and UNDA approaches treat all novel categories as one unified unknown class and attempt to detect this unknown class during training. We find that domain variance leads to more significant view noise in unsupervised data augmentation, which hinders further applications of contrastive learning~(CL); moreover, the current closed-set and open-set classifiers cause the model to be overconfident in novel class discovery. To address these two issues, we propose the Soft-contrastive All-in-one Network~(SAN) for ODA and UNDA tasks. SAN includes a novel data-augmentation-based CL loss, used to improve representational capability, and a more human-intuitive classifier, used to improve the new-class discovery capability. The soft contrastive learning~(SCL) loss weakens the adverse effects of the data-augmentation label-noise problem, which is amplified in domain transfer. The All-in-One~(AIO) classifier overcomes the overconfidence problem of the current mainstream closed-set and open-set classifiers in a more human-intuitive way. Visualization results and ablation experiments demonstrate the importance of the two proposed innovations. Moreover, extensive experimental results on ODA and UNDA show that SAN has advantages over existing state-of-the-art methods.
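As a hedged sketch of how a contrastive objective can be softened against view noise, one can mix the hard InfoNCE targets with model-derived soft similarities; the specific mixing rule below is an assumption, not the exact SCL loss of SAN:

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(z1, z2, temperature=0.5, mix=0.5):
    """Contrastive loss with softened targets: noisy augmented views
    (e.g., those distorted by domain variance) are no longer forced
    to be hard positives/negatives. z1, z2: (batch, dim) embeddings
    of two augmented views, matched row-by-row."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature             # (batch, batch)
    hard = torch.eye(len(z1), device=z1.device)    # standard InfoNCE targets
    soft = F.softmax((z1 @ z2.t()).detach() / temperature, dim=-1)
    targets = (1 - mix) * hard + mix * soft        # soften the label noise
    return F.cross_entropy(logits, targets)        # targets may be soft probs
```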
Vision Transformer (ViT) has emerged as a competitive alternative to convolutional neural networks for various computer vision applications. Specifically, ViT's multi-head attention layers make it possible to embed information globally across the overall image. Nevertheless, computing and storing such attention matrices incurs a cost that is quadratic in the number of patches, limiting the achievable efficiency and scalability and prohibiting more extensive real-world ViT applications on resource-constrained devices. Sparse attention has been shown to be a promising direction for improving hardware acceleration efficiency for NLP models; however, a systematic counterpart approach for accelerating ViT models is still missing. To close this gap, we propose a first-of-its-kind algorithm-hardware co-designed framework, dubbed ViTALiTy, for boosting the inference efficiency of ViTs. Unlike sparsity-based Transformer accelerators for NLP, ViTALiTy unifies both the low-rank and sparse components of attention in ViTs. At the algorithm level, we approximate the dot-product softmax operation via first-order Taylor attention with row-mean centering as the low-rank component to linearize the cost of the attention blocks, and further boost accuracy by incorporating a sparsity-based regularization. At the hardware level, we develop a dedicated accelerator to better leverage the resulting workload and pipeline from ViTALiTy's linear Taylor attention, which requires executing only the low-rank component, to further boost hardware efficiency. Extensive experiments and ablation studies validate that ViTALiTy offers boosted end-to-end efficiency (e.g., $3\times$ faster and $3\times$ more energy-efficient) under comparable accuracy, with respect to the state-of-the-art solution.
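The linearization idea can be sketched directly, assuming single-head attention with Q, K, V of shape (tokens, dim); the sparse correction component and the exact centering scheme of ViTALiTy are omitted:

```python
import torch

def taylor_attention(Q, K, V):
    """First-order Taylor attention: approximate exp(q.k) by 1 + q.k,
    so the numerator and denominator of softmax attention factorize
    and K^T V is computed once and shared across all queries, giving
    cost linear (rather than quadratic) in the number of tokens."""
    Q = Q - Q.mean(dim=-2, keepdim=True)       # row-mean centering (sketch)
    n = K.shape[-2]
    kv = K.transpose(-2, -1) @ V               # (dim, dim), token-count free
    k_sum = K.sum(dim=-2)                      # (dim,)
    numer = V.sum(dim=-2) + Q @ kv             # 1·V + Q (K^T V)
    denom = n + Q @ k_sum.unsqueeze(-1)        # 1·1 + Q (K^T 1), shape (n, 1)
    return numer / denom.clamp(min=1e-6)       # guard: Taylor terms can dip below 0
```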
With the development of depth sensors in recent years, RGBD object tracking has received significant attention. Compared with traditional RGB object tracking, the additional depth modality can effectively resolve interference between the target and the background. However, some existing RGBD trackers use the two modalities separately, so particularly useful shared information between them is ignored. On the other hand, some methods attempt to fuse the two modalities by treating them equally, resulting in the loss of modality-specific features. To tackle these limitations, we propose a novel Dual-fused Modality-aware Tracker (termed DMTracker), which aims to learn informative and discriminative representations of the target objects for robust RGBD tracking. DMTracker contains two fusion modules: the first focuses on extracting the shared information between modalities based on cross-modal attention, while the second integrates the RGB-specific and depth-specific information to enhance the fused features. By fusing both the modality-shared and modality-specific information in a modality-aware scheme, our DMTracker can learn discriminative representations in complex tracking scenes. Experiments show that our proposed tracker achieves very promising results on challenging RGBD benchmarks. Code is available at \url{https://github.com/ShangGaoG/DMTracker}.
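A hedged sketch of cross-modal attention fusion of the kind described, built from standard multi-head attention; the module names and residual scheme are illustrative, not the released DMTracker code:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Each modality attends to the other to extract modality-shared
    information; residual connections keep the modality-specific
    features alongside before the two streams are merged."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.rgb_from_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, rgb, depth):            # each: (batch, tokens, dim)
        shared_rgb, _ = self.rgb_from_depth(rgb, depth, depth)
        shared_depth, _ = self.depth_from_rgb(depth, rgb, rgb)
        fused = torch.cat([rgb + shared_rgb, depth + shared_depth], dim=-1)
        return self.merge(fused)              # (batch, tokens, dim)
```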
Personalized natural language generation for explainable recommendation plays a key role in justifying why a recommendation might match a user's interests. Existing models usually control the generation process via soft constraints (e.g.,~aspect planning). While promising, these methods struggle to generate specific information correctly, which prevents the generated explanations from being informative and diverse. In this paper, we propose UCEpic, an explanation generation model that unifies aspect planning and lexical constraints for controllable, personalized generation. Specifically, we first pre-train a non-personalized text generator via a proposed robust insertion process, so that the model is able to generate sentences containing lexical constraints. Then, we demonstrate how to incorporate aspect planning and personalized references into the insertion process to obtain personalized explanations. Compared with previous work controlled by soft constraints, UCEpic incorporates specific information from keyphrases and thus largely improves the diversity and informativeness of the generated explanations. Extensive experiments on RateBeer and Yelp show that UCEpic can generate high-quality and diverse explanations for recommendations.
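A toy sketch of why an insertion-based process guarantees that lexical constraints survive: the keyphrases seed the sequence, and the model may only insert around them, never delete. Here `insert_token` is a hypothetical model call, and the loop illustrates the decoding shape rather than UCEpic's actual procedure:

```python
def insertion_generate(keyphrases, insert_token, max_rounds=10):
    """Grow a sentence by repeated insertion. `insert_token(seq, slot)`
    returns a token to place at position `slot`, or None to leave the
    slot empty. The constraint keyphrases can never be removed."""
    seq = list(keyphrases)                    # constraints seed the sequence
    for _ in range(max_rounds):
        inserted = False
        for slot in range(len(seq), -1, -1):  # right-to-left keeps indices valid
            tok = insert_token(seq, slot)
            if tok is not None:
                seq.insert(slot, tok)
                inserted = True
        if not inserted:                      # no slot wants an insertion: done
            break
    return " ".join(seq)
```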
Solar flares, especially M- and X-class flares, are often associated with coronal mass ejections (CMEs). They are the most important sources of space weather effects and can severely impact the near-Earth environment. It is therefore essential to forecast flares (especially X-class flares) in order to mitigate their destructive and hazardous consequences. Here, we introduce several statistical and machine learning approaches to predict the flare index (FI) of an active region (AR), which quantifies the flare productivity of an AR by taking into account the numbers of flares of different classes within a certain time interval. Specifically, our sample covers 563 ARs that appeared on the solar disk from May 2010 to December 2017. Twenty-five magnetic parameters, provided by the Space-weather HMI Active Region Patches (SHARP) from the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO), characterize by proxy the coronal magnetic energy stored in the ARs and are used as the predictors. We investigate the relationship between these SHARP parameters and the FI of ARs with a machine learning algorithm (spline regression) and a resampling method (Synthetic Minority Over-sampling Technique for Regression with Gaussian Noise, SMOGN for short). Based on the established relationship, we are able to predict the FI value of a given AR over the next 1-day period. Compared with four other popular machine learning algorithms, our method improves the accuracy of FI prediction, especially for large FI values. In addition, we rank the importance of the SHARP parameters using the Borda count method based on the ranks rendered by nine different machine learning methods.
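A minimal sketch of the spline-regression step with scikit-learn; the hyperparameters are assumptions, and the SMOGN resampling of rare large-FI samples is indicated but not implemented here:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer, StandardScaler
from sklearn.linear_model import Ridge

# X: (n_ARs, 25) SHARP magnetic parameters; y: flare index over the
# next day. SMOGN would be applied to (X, y) before fitting so that
# rare, large-FI ARs are better represented in the training set.
model = make_pipeline(
    StandardScaler(),
    SplineTransformer(degree=3, n_knots=5),  # per-feature cubic spline basis
    Ridge(alpha=1.0),                        # linear model on the spline basis
)
# model.fit(X_resampled, y_resampled)
# fi_next_day = model.predict(X_new)
```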
Manifold learning~(ML) aims to find low-dimensional embeddings of high-dimensional data. Previous works focus on handcrafted or easy datasets with simple and ideal scenarios; however, we find that they perform poorly on real-world datasets with under-sampled data. In general, ML methods first model the data structure and then process the low-dimensional embedding; the poor local connectivity of under-sampled data in the former step and improper optimization objectives in the latter step lead to \emph{structural distortion} and \emph{underconstrained embedding}. To solve this problem, we propose Deep Local-flatness Manifold Embedding (DLME), a novel ML framework that obtains reliable manifold embeddings by reducing distortion. Our proposed DLME constructs semantic manifolds via data augmentation and overcomes the \emph{structural distortion} problem with the help of its smoothness framework. To overcome \emph{underconstrained embedding}, we design a specific loss for DLME and show mathematically that it leads to a more suitable embedding under our proposed local flatness assumption. In experiments, by demonstrating the effectiveness of DLME on downstream classification, clustering, and visualization tasks with three types of datasets (toy, biological, and image), our results show that DLME outperforms SOTA ML and contrastive learning~(CL) methods.
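As a loose illustration of the kernel-matching flavor of such embedding losses, the sketch below aligns pairwise similarities between the representation space and the embedding space under a t-distribution kernel; this is a generic construction, not the specific DLME loss:

```python
import torch
import torch.nn.functional as F

def kernel_matching_loss(h, z, nu=1.0):
    """Match row-normalized pairwise similarities of a high-dimensional
    representation h (e.g., built from augmented views) and its
    low-dimensional embedding z, using a heavy-tailed t-kernel so that
    distant points exert less distorting pull on the embedding."""
    def t_kernel(x):
        d2 = torch.cdist(x, x).pow(2)
        return (1.0 + d2 / nu).pow(-(nu + 1.0) / 2.0)
    p = F.normalize(t_kernel(h), p=1, dim=-1)   # row-stochastic similarities
    q = F.normalize(t_kernel(z), p=1, dim=-1)
    return F.kl_div(q.log(), p, reduction="batchmean")
```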